3 research outputs found

    Model and Tools for Integrating IoT into Mixed Reality Environments: Towards a Virtual-Real Seamless Continuum

    This paper introduces a new software model and new tools for managing indoor smart environments (smart homes, smart buildings, smart factories, etc.) with Mixed Reality (MR) technologies. Our fully integrated solution is built around a software model of connected objects that manages them independently of their actual nature: the objects can be simulated or real. Based on this model, our goal is to create a continuum between a real smart environment and its 3D digital twin so that the environment can be simulated and manipulated. Two kinds of tools are introduced to leverage this model. First, we present two complementary tools, one in AR and one in VR, for creating the digital twin of a given smart environment. Second, we propose 3D interactions and dedicated metaphors for creating automation scenarios in the same VR application. These scenarios are then converted into a Petri-net-based model that expert users can edit later. Adjusting the parameters of our model makes it possible to navigate along the continuum and use the digital twin for simulation, deployment, and real/virtual synchronization. These contributions and their benefits are illustrated through the automation configuration of a room in our lab.
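The core idea above is an object abstraction that the digital twin can drive the same way whether the device is real or simulated. The sketch below illustrates one minimal way such an abstraction could look; all names (ConnectedObject, SimulatedLamp, RealLamp, synchronize) are hypothetical and not the paper's actual API.

```python
# Minimal, hypothetical sketch of a connected-object abstraction that hides
# whether a device is real or simulated, in the spirit of the paper's
# real/virtual continuum. Not the authors' actual model or API.
from abc import ABC, abstractmethod


class ConnectedObject(ABC):
    """Uniform interface used by the digital twin, regardless of the backend."""

    @abstractmethod
    def read_state(self) -> dict:
        ...

    @abstractmethod
    def apply_command(self, command: str, value) -> None:
        ...


class SimulatedLamp(ConnectedObject):
    """In-memory stand-in used when the environment is purely virtual."""

    def __init__(self) -> None:
        self._state = {"power": False, "brightness": 0}

    def read_state(self) -> dict:
        return dict(self._state)

    def apply_command(self, command: str, value) -> None:
        self._state[command] = value


class RealLamp(ConnectedObject):
    """Placeholder for a driver that would talk to the physical device."""

    def read_state(self) -> dict:
        raise NotImplementedError("would query the physical lamp here")

    def apply_command(self, command: str, value) -> None:
        raise NotImplementedError("would send the command to the physical lamp here")


def synchronize(twin_objects: dict[str, ConnectedObject]) -> dict[str, dict]:
    """One synchronization pass: snapshot every object's state for the 3D twin."""
    return {name: obj.read_state() for name, obj in twin_objects.items()}


if __name__ == "__main__":
    room = {"desk_lamp": SimulatedLamp()}
    room["desk_lamp"].apply_command("power", True)
    print(synchronize(room))  # {'desk_lamp': {'power': True, 'brightness': 0}}
```

Because the twin only sees the ConnectedObject interface, swapping SimulatedLamp for RealLamp would move the same scene along the real/virtual continuum without touching the rest of the application.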

    Modèle de conscience situationnelle pour la collaboration distante asymétrique en réalité mixte [Situational awareness model for asymmetric remote collaboration in mixed reality]

    No full text
    Being able to collaborate remotely with other people provides valuable capabilities for tasks that require multiple users to be accomplished. Moreover, Mixed Reality (MR) technologies are powerful tools for developing new kinds of applications with more natural interaction and perception than classical desktop setups. In this thesis, we propose to improve remote collaboration using MR technologies that take advantage of our natural skills for performing tasks in 3D environments. In particular, we focus on the asymmetric aspects involved in this kind of collaboration: roles, points of view (PoV), devices, and the level of virtuality of the MR application. First, we focus on awareness issues and propose a generic model able to accurately describe a collaborative MR application while taking potential asymmetry dimensions into account. To address all these dimensions, we split the final model into two layers that separate the real and virtual spaces of each user. In this model, each user can generate different kinds of input and receive feedback with different meanings in order to maintain their own awareness of the shared Virtual Environment (VE). We then conduct an exploratory user study on the consequences of asymmetric PoVs and on how users' representations affect their awareness of the other collaborators. Second, we apply these findings to a remote guiding context in which a remote guide helps an operator perform a maintenance task. For this use case, the expert uses a Virtual Reality (VR) interface to assist the operator, who works through an Augmented Reality (AR) interface. We contribute to this field by enhancing the expert's perception of the remote workspace and by providing more natural interactions to guide the operator through non-intrusive guiding cues integrated into the real world. Last, we address an even more awareness-sensitive situation in remote collaboration: virtual co-manipulation. It requires near-perfect synchronization between collaborators to achieve the task efficiently, so the system must provide appropriate feedback to maintain a high level of awareness, especially about what others are currently doing. In particular, we propose a hybrid co-manipulation technique, inspired by our previous remote guiding use case, that combines manipulation of the virtual object and of the other user's PoV at the same time.
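The thesis abstract describes a per-user model split into a real layer and a virtual layer, with each collaborator producing inputs and receiving feedback to stay aware of the shared VE. The sketch below is only an illustrative data layout under that description; the class and function names (UserLayer, Collaborator, broadcast_action) are assumptions, not the thesis' formal model.

```python
# Hedged sketch (hypothetical names) of the two-layer idea described above:
# each collaborator has a real-space layer and a virtual-space layer, produces
# inputs, and receives awareness feedback about the shared virtual environment.
from dataclasses import dataclass, field


@dataclass
class UserLayer:
    """One side of a user's description: their real space or their virtual space."""
    device: str             # e.g. "AR headset", "VR headset", "desktop"
    point_of_view: str      # e.g. "first-person", "third-person"
    representation: str     # how this user appears to others, e.g. "avatar"


@dataclass
class Collaborator:
    role: str               # e.g. "guide" or "operator" (asymmetric roles)
    real: UserLayer
    virtual: UserLayer
    pending_feedback: list = field(default_factory=list)


def broadcast_action(sender: Collaborator, action: str, others: list) -> None:
    """Turn one user's input into awareness feedback for every other collaborator.

    The cue is tagged with each receiver's device, mirroring the idea that the
    same event may need different presentations in AR and in VR.
    """
    for other in others:
        cue = f"{sender.role} did '{action}' (shown on {other.virtual.device})"
        other.pending_feedback.append(cue)


if __name__ == "__main__":
    guide = Collaborator(
        "guide",
        UserLayer("desktop", "third-person", "camera icon"),
        UserLayer("VR headset", "first-person", "avatar"),
    )
    operator = Collaborator(
        "operator",
        UserLayer("AR headset", "first-person", "body"),
        UserLayer("AR headset", "first-person", "hands only"),
    )
    broadcast_action(guide, "point at the valve", [operator])
    print(operator.pending_feedback)
```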

    Impact of task constraints on a 3D visuomotor tracking task in virtual reality

    No full text
    Objective: The aim of the present study was to evaluate the impact of different task constraints on participants' adaptation when performing a 3D visuomotor tracking task in a virtual environment. Methods: Twenty-three volunteer participants were tested with the HTC Vive Pro Eye VR headset in a task that consisted of tracking a virtual target moving in a cube with an effector controlled by the preferred hand. Participants performed 120 trials under three task constraints (i.e., gain, size, and speed), each applied in four randomized conditions. The target-effector distance and the elbow's range of movement were measured. Results: The results showed an increase in the distance to the target when the task constraints were the strongest. In addition, a change in movement kinematics was observed, with elbow amplitude increasing as task constraints increased. The depth dimension also played a major role in task difficulty and in elbow amplitude and coupling during the tracking task. Conclusion: This research is an essential step towards characterizing interactions with a 3D virtual environment and showing how virtual constraints can facilitate the arm's involvement in the depth dimension.
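The study's main dependent variable is the 3D target-effector distance. The snippet below is an illustrative computation of that measure from position samples, not the authors' analysis code; the function name and the toy coordinates are assumptions.

```python
# Illustrative sketch (not the study's analysis code) of the mean 3D distance
# between a moving target and the hand-held effector, computed from position
# samples recorded during a trial.
import math


def mean_tracking_error(target_positions, effector_positions):
    """Average Euclidean target-effector distance over matched samples (metres)."""
    assert len(target_positions) == len(effector_positions)
    total = 0.0
    for target, effector in zip(target_positions, effector_positions):
        total += math.dist(target, effector)
    return total / len(target_positions)


if __name__ == "__main__":
    # Two toy samples: the effector lags 5 cm behind the target on the depth axis (z).
    targets = [(0.0, 1.2, 0.50), (0.1, 1.2, 0.55)]
    effector = [(0.0, 1.2, 0.45), (0.1, 1.2, 0.50)]
    print(f"mean error: {mean_tracking_error(targets, effector):.3f} m")  # 0.050 m
```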